Minimizing finite sums with the stochastic average gradient

Authors
Abstract

Similar articles

Minimizing Finite Sums with the Stochastic Average Gradient

We propose the stochastic average gradient (SAG) method for optimizing the sum of a finite number of smooth convex functions. Like stochastic gradient (SG) methods, the SAG method’s iteration cost is independent of the number of terms in the sum. However, by incorporating a memory of previous gradient values the SAG method achieves a faster convergence rate than black-box SG methods. The conver...
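The stored-gradient idea described above can be illustrated with a minimal NumPy sketch; the function name sag, the helper grad_i, and the least-squares usage below are assumptions for illustration, not the authors' reference implementation.

    import numpy as np

    def sag(grad_i, x0, n, step, iters, rng=None):
        # Minimal SAG-style sketch: remember the last gradient seen for each
        # of the n terms and step along the average of the stored gradients.
        # grad_i(x, i) returns the gradient of the i-th term f_i at x.
        rng = np.random.default_rng() if rng is None else rng
        x = np.asarray(x0, dtype=float).copy()
        memory = np.zeros((n, x.size))  # stored gradient of each f_i
        total = np.zeros(x.size)        # running sum of stored gradients
        for _ in range(iters):
            i = rng.integers(n)
            g_new = grad_i(x, i)
            total += g_new - memory[i]  # refresh the running sum in O(d)
            memory[i] = g_new
            x -= step * total / n       # step along the average gradient
        return x

    # Illustrative use on least squares, f_i(x) = 0.5 * (A[i] @ x - b[i]) ** 2
    A, b = np.random.randn(200, 5), np.random.randn(200)
    x_hat = sag(lambda x, i: (A[i] @ x - b[i]) * A[i],
                np.zeros(5), n=200, step=1e-2, iters=5000)

Each iteration touches a single randomly chosen term, so the per-iteration cost is independent of n, while the table of remembered gradients is what allows a faster rate than plain SG.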

Accelerated Stochastic Gradient Descent for Minimizing Finite Sums

We propose an optimization method for minimizing the finite sums of smooth convex functions. Our method incorporates an accelerated gradient descent (AGD) and a stochastic variance reduction gradient (SVRG) in a mini-batch setting. Unlike SVRG, our method can be directly applied to non-strongly and strongly convex problems. We show that our method achieves a lower overall complexity than the re...
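As context for the variance-reduction ingredient mentioned above, here is a plain (non-accelerated, single-sample) SVRG sketch; the names svrg, grad_i, and full_grad are assumptions for illustration, and the paper's acceleration and mini-batching are not reproduced.

    import numpy as np

    def svrg(grad_i, full_grad, x0, n, step, epochs, inner, rng=None):
        # Plain SVRG sketch: each epoch recomputes the full gradient at a
        # snapshot point and uses it to reduce the variance of the
        # single-sample stochastic gradients taken in the inner loop.
        # grad_i(x, i) is the gradient of the i-th term; full_grad(x) is
        # the gradient of the whole (averaged) objective.
        rng = np.random.default_rng() if rng is None else rng
        x = np.asarray(x0, dtype=float).copy()
        for _ in range(epochs):
            snapshot = x.copy()
            mu = full_grad(snapshot)  # one full-gradient pass per epoch
            for _ in range(inner):
                i = rng.integers(n)
                # variance-reduced estimate: unbiased, and its variance
                # shrinks as the iterate approaches the snapshot
                v = grad_i(x, i) - grad_i(snapshot, i) + mu
                x -= step * v
        return x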

Supplementary Materials: Accelerated Stochastic Gradient Descent for Minimizing Finite Sums

1 Proof of Proposition 1 We now prove Proposition 1, which gives the condition for compactness of the sublevel set. Proof. Let B(r) and S(r) denote the ball and sphere of radius r, centered at the origin. By an affine transformation, we can assume that X∗ contains the origin O, X∗ ⊂ B(1), and X∗ ∩ S(1) = ∅. Then we have that for all x ∈ S(1), ⟨∇f(x), x⟩ ≥ f(x) − f(O) > 0, where we use convexity for ...
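The inequality quoted in this fragment is the first-order characterization of convexity, rearranged; spelled out in LaTeX (this only restates the step the excerpt invokes):

    % First-order convexity of f at x, evaluated at the origin O in X^*:
    %   f(O) >= f(x) + <\nabla f(x), O - x>,
    % which rearranges to the inequality used for every x on the unit sphere:
    \[
      \langle \nabla f(x), x \rangle \;\ge\; f(x) - f(O) > 0,
      \qquad \forall\, x \in S(1),
    \]
    % where f(x) - f(O) > 0 because O \in X^* is a minimizer and
    % S(1) \cap X^* = \emptyset, so x is not a minimizer.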

Parallel stochastic line search methods with feedback for minimizing finite sums

We consider unconstrained minimization of a finite sum of N continuously differentiable, not necessarily convex, cost functions. Several gradient-like (and, more generally, line search) methods, where the full gradient (the sum of the N component costs’ gradients) at each iteration k is replaced with an inexpensive approximation based on a sub-sample Nk of the component costs’ gradients, are availabl...
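A generic version of such a sub-sampled step can be sketched as follows; the function subsampled_step and the Armijo backtracking rule are assumptions for illustration, and the paper's specific feedback mechanism for adapting the sample is not reproduced.

    import numpy as np

    def subsampled_step(f_i, grad_i, x, sample, alpha0=1.0, c=1e-4,
                        shrink=0.5, max_backtracks=30):
        # One gradient-like step in which both the search direction and the
        # Armijo backtracking line search are evaluated only on the current
        # sub-sample of component costs, instead of all N of them.
        f_sub = lambda z: np.mean([f_i(z, i) for i in sample])
        g = np.mean([grad_i(x, i) for i in sample], axis=0)
        alpha, fx = alpha0, f_sub(x)
        for _ in range(max_backtracks):
            x_new = x - alpha * g
            if f_sub(x_new) <= fx - c * alpha * (g @ g):  # Armijo condition
                return x_new, alpha
            alpha *= shrink
        return x, 0.0  # no acceptable step on this sub-sample; keep x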

Small Stochastic Average Gradient Steps

The stochastic average gradient (SAG) method was the first of its kind to achieve linear convergence based on cheap-to-evaluate stochastic gradients. As such it established a milestone of optimization techniques for machine learning. In the few years since its inception it has spawned a large amount of related work. In this short paper we analyze the behavior of the SAG algorithm when operated i...

Journal

Journal title: Mathematical Programming

Year: 2016

ISSN: 0025-5610, 1436-4646

DOI: 10.1007/s10107-016-1030-6